<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<site>sibgrapi.sid.inpe.br 802</site>
		<holdercode>{ibi 8JMKD3MGPEW34M/46T9EHH}</holdercode>
		<identifier>6qtX3pFwXQZeBBx/wkypH</identifier>
		<repository>sid.inpe.br/banon/2002/12.02.11.07</repository>
		<lastupdate>2002:11.14.02.00.00 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatarepository>sid.inpe.br/banon/2002/12.02.11.07.22</metadatarepository>
		<metadatalastupdate>2022:06.14.00.12.24 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2001}</metadatalastupdate>
		<doi>10.1109/SIBGRAPI.2001.963054</doi>
		<citationkey>GonçalvesKallThal:2001:ApSoAu</citationkey>
		<title>Programming behaviors with local perception and smart objects: an approach to solve autonomous agents tasks</title>
		<year>2001</year>
		<numberoffiles>1</numberoffiles>
		<size>1036 KiB</size>
		<author>Gonçalves, Luiz M. G.,</author>
		<author>Kallmann, Marcelo,</author>
		<author>Thalmann, Daniel,</author>
		<editor>Borges, Leandro Díbio,</editor>
		<editor>Wu, Shin-Ting,</editor>
		<conferencename>Brazilian Symposium on Computer Graphics and Image Processing, 14 (SIBGRAPI)</conferencename>
		<conferencelocation>Florianópolis, SC, Brazil</conferencelocation>
		<date>15-18 Oct. 2001</date>
		<publisher>IEEE Computer Society</publisher>
		<publisheraddress>Los Alamitos</publisheraddress>
		<pages>184-191</pages>
		<booktitle>Proceedings</booktitle>
		<tertiarytype>Full Paper</tertiarytype>
		<organization>SBC - Brazilian Computer Society</organization>
		<transferableflag>1</transferableflag>
		<versiontype>finaldraft</versiontype>
		<keywords>computer animation, virtual agent, smart objects</keywords>
		<abstract>We propose an integrated framework in which local perception and close manipulation skills are used in conjunction with a high-level behavioral interface based on the smart objects paradigm as support for a virtual agent to perform autonomous tasks. In our model, virtual "smart objects" encapsulate information about possible high-level interactions between the agent and its environment, including sub-tasks defined by scripts that the agent can perform. We use the information provided by low-level sensing mechanisms to construct a set of local, perceptual features with which to categorise possible target objects at run-time. Once objects are activated, based on their interactivity information and on the current task plan, the agent can reparametrize its behavior according to its mission goal defined in a global plan script. A challenging problem solved here is the construction (abstraction) of the mechanism to link individual perceptions to actions. As a practical result, virtual agents are capable of acting with more autonomy, enhancing their performance.</abstract>
		<language>en</language>
		<targetfile>184-191.pdf</targetfile>
		<usergroup>administrator</usergroup>
		<visibility>shown</visibility>
		<nexthigherunit>8JMKD3MGPEW34M/46Q6TJ5</nexthigherunit>
		<nexthigherunit>8JMKD3MGPEW34M/4742MCS</nexthigherunit>
		<citingitemlist>sid.inpe.br/sibgrapi/2022/04.29.19.35 9</citingitemlist>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<notes>The conference was held in Florianópolis, SC, Brazil, from October 15 to 18.</notes>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/banon/2002/12.02.11.07</url>
	</metadata>
</metadatalist>